Boodles X KaurMaxwell: Navigating AI Governance

Insights - 2 December 2025

Key Takeaways from the Boodles X KaurMaxwell Artificial Intelligence (AI) Reception

As AI plays a growing role in redefining workflows, sharpening data-led decision-making, and shaping entirely new products and business models, the legal systems attempting to govern its use are striving to keep pace.


At Boodles, one of London’s most historic jewellers, KaurMaxwell welcomed a gathering of industry professionals to discuss the current regulatory regimes in the EU and the UK, the bodies responsible for governing this emerging technology, and the potential conflicts between AI and certain existing rights.


The Building Blocks


At the heart of the intimate venue, KaurMaxwell’s Head of Technology led the interactive session by unpacking the fundamentals of AI. In essence, data is the raw material that enables any AI system to learn, make decisions, and perform tasks. For generative AI models in particular, the quality and quantity of the input data directly determine the reliability and fairness of the output. It is therefore vital that businesses using generative AI curate well-structured, segmented, and ethically sourced datasets to mitigate the risk of downstream legal claims.
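
For the technically minded, the sketch below illustrates in Python the kind of dataset hygiene this implies: deduplication, removal of records without a documented lawful basis, and a simple check of segment balance. The record fields and checks are illustrative assumptions, not a compliance standard.

```python
from collections import Counter

# Hypothetical training records: text plus provenance metadata.
records = [
    {"text": "Sample review A", "source": "licensed_corpus", "consented": True,  "segment": "retail"},
    {"text": "Sample review A", "source": "licensed_corpus", "consented": True,  "segment": "retail"},
    {"text": "Sample review B", "source": "web_scrape",      "consented": False, "segment": "finance"},
    {"text": "Sample review C", "source": "licensed_corpus", "consented": True,  "segment": "finance"},
]

def curate(records):
    """Deduplicate, keep only ethically sourced entries, and report segment balance."""
    seen, curated = set(), []
    for r in records:
        if r["text"] in seen:
            continue  # drop exact duplicates that would skew training
        if not r["consented"]:
            continue  # drop data without a documented lawful basis
        seen.add(r["text"])
        curated.append(r)
    balance = Counter(r["segment"] for r in curated)
    return curated, balance

curated, balance = curate(records)
print(f"{len(curated)} records kept; segment balance: {dict(balance)}")
```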


The EU AI Act


The European Union (EU) AI Act was then brought into focus. The Act regulates AI systems in the EU through a comprehensive framework that classifies each system by the level of risk it poses (Unacceptable, High Risk, Limited Risk, Minimal Risk), promoting safe and ethical AI development while mitigating the associated risks.


AI systems deemed “Unacceptable” under the Act, such as those that conduct social scoring of individuals based on their behaviour, are prohibited outright. Providers and deployers of “High Risk” AI systems are subject to strict obligations, including the need to conduct conformity assessments and to maintain a quality management system, with systematic documentation of policies and procedures for regulatory compliance. Some businesses offering their services in the EU may assume they are not working with or integrating “High Risk” AI systems, yet fail to realise that even a CV-sorting tool used in recruitment could fall into that category. The more common use cases of generative AI, such as AI chatbots, generally fall under the “Limited Risk” category, and businesses deploying these chatbots have specific disclosure obligations to ensure users know when they are interacting with AI.
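
To make the tiers concrete, the following toy sketch maps a handful of hypothetical use cases onto the Act’s risk categories. The classification logic is purely illustrative; real classification turns on the Act’s annexes and the specifics of each deployment, and nothing here is legal advice.

```python
# Illustrative only: real classification under the EU AI Act turns on the
# Act's annexes and the specifics of each deployment, not a keyword lookup.
UNACCEPTABLE_USE_CASES = {"social_scoring"}
HIGH_RISK_USE_CASES = {"cv_screening", "credit_scoring", "exam_grading"}

def risk_tier(use_case: str) -> str:
    if use_case in UNACCEPTABLE_USE_CASES:
        return "Unacceptable: prohibited outright"
    if use_case in HIGH_RISK_USE_CASES:
        return "High Risk: conformity assessment and quality management system required"
    if use_case == "customer_chatbot":
        return "Limited Risk: users must be told they are interacting with AI"
    return "Minimal Risk: no additional obligations under the Act"

for case in ("social_scoring", "cv_screening", "customer_chatbot", "spam_filter"):
    print(f"{case}: {risk_tier(case)}")
```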


Reconciliation with Existing Laws


The conversation turned to some of the challenges of reconciling the EU AI Act with existing laws and regulations.


For example, the General Data Protection Regulation (GDPR) grants individuals the right to erasure of their personal data, but it does not define what erasure means in the context of AI. Most AI systems rely on machine learning, which makes deletion particularly problematic: once data has been used to train a model, it may not be technically possible to remove the influence of a specific data point without retraining the model from scratch. As a result, organisations using such AI systems may struggle to comply with erasure requests in a manner that satisfies data protection authorities. This is why it is crucial to proactively implement the principles of privacy by design in any project or initiative that integrates AI into its technology stack; doing so builds trust with users and helps mitigate the risk of potentially catastrophic data breaches.
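
A deliberately simple sketch of why this is so difficult: even for a toy “model” that is nothing more than the mean of its training data, deleting the raw record leaves the trained value unchanged, and the only guaranteed remedy is retraining without the point, which for a large generative model can mean repeating an entire training run.

```python
import statistics

# Toy "model": the mean of its training data. Even here, the trained value
# carries the influence of every point, with no per-point record to delete.
def train(data):
    return statistics.mean(data)

data = [4.0, 6.0, 8.0, 100.0]   # the last point is the one to be "erased"
model = train(data)

# Deleting the raw record does not change the trained parameter:
print(f"Trained value: {model}")  # 29.5, still shaped by the erased point

# Exact erasure requires retraining on the remaining data, which for a
# large generative model can mean repeating the entire training run.
retrained = train(data[:-1])
print(f"Retrained without the point: {retrained}")  # 6.0
```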


The UK’s Approach


Bringing the discussion closer to home, deeper deliberation was encouraged on the differences between the prescriptive, comparatively static legislative framework of the EU AI Act and the more flexible, principles-based approach of the UK Government.


Instead of assigning responsibility for AI governance to a single regulator, the UK Government is empowering existing regulators, such as the Medicines and Healthcare products Regulatory Agency (MHRA) and the Information Commissioner’s Office (ICO), to produce tailored regulatory guidance for their sectors in line with five core principles: (1) safety, security and robustness; (2) transparency and explainability; (3) fairness; (4) accountability and governance; and (5) contestability and redress.

Beyond sector-specific regulation, court rulings are gradually shaping what is lawful in the context of AI. In the recent Supreme Court judgment in Thaler (2023), it was held that an AI system cannot legally be recognised as an “inventor”; under the Patents Act 1977, an inventor must be a natural person. This underscores the principle that AI is treated as a tool rather than a legal subject with rights or duties, so the humans and businesses behind AI systems remain responsible and accountable for their output.


Final Word


While the adaptability of the UK’s approach to AI governance offers space for innovation, it remains to be seen whether it leaves regulatory gaps when compared with the EU’s more holistic framework. Nonetheless, AI-centric businesses operating in or serving the UK should take a proactive stance: manage risk, monitor regulatory developments, and embed strong governance into their AI systems by design so that they can scale safely and sustainably.